Supplementary Material for "Non-Asymptotic Error Bounds for Bidirectional GANs"
Department of Mathematics, The Hong Kong University of Science and Technology, Clear Water Bay, Hong Kong, China. yyangdc@connect.ust.hk

In this supplementary material, we first prove Theorem 3.2, and then Theorems 3.1 and 3.3. We use σ to denote the ReLU activation function in neural networks, that is, σ(x) = max{x, 0}. We use the notations O(·) and Õ(·) to express the order of a function in slightly different ways: O(·) omits a universal constant that does not depend on d, while Õ(·) omits a constant that may depend on d. To date, most related works assume that the target distribution µ is supported on a compact set; see, for example, Chen et al. (2020) and Liang (2020).
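For concreteness, here is a minimal sketch of this notation in display form, assuming the standard feedforward ReLU architecture; the weights W_i, biases b_i, and depth L are generic illustration symbols, not necessarily those of the main text.

% Minimal sketch, assuming the standard feedforward ReLU form;
% W_i, b_i, and L are generic symbols introduced for illustration.
\[
  \sigma(x) = \max\{x, 0\} \quad \text{(applied coordinatewise to vectors)},
\]
\[
  f(x) = W_L\,\sigma\!\big(W_{L-1}\cdots\sigma(W_1 x + b_1)\cdots + b_{L-1}\big) + b_L .
\]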
Non-Asymptotic Error Bounds for Bidirectional GANs
We derive nearly sharp bounds for the bidirectional GAN (BiGAN) estimation error, measured in the Dudley distance between the latent joint distribution and the data joint distribution, under appropriately specified architectures for the neural networks used in the model. To the best of our knowledge, this is the first theoretical guarantee for the bidirectional GAN learning approach. An appealing feature of our results is that they do not require the reference and data distributions to have the same dimension or to have bounded support. These assumptions are common in the existing convergence analyses of unidirectional GANs but may not be satisfied in practice. Our results also apply to the Wasserstein bidirectional GAN when the target distribution is assumed to have bounded support.
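For reference, the Dudley distance is written below in its standard bounded-Lipschitz form; it is an assumption here that the main text uses this classical definition (its evaluation class may be restricted further there).

% Standard bounded-Lipschitz form of the Dudley distance between
% probability measures \mu and \nu; assuming the classical definition.
\[
  d_{\mathrm{Dudley}}(\mu, \nu)
  = \sup_{\|f\|_{\mathrm{BL}} \le 1}
    \Big| \mathbb{E}_{X \sim \mu} f(X) - \mathbb{E}_{Y \sim \nu} f(Y) \Big|,
  \qquad
  \|f\|_{\mathrm{BL}} = \|f\|_{\infty} + \mathrm{Lip}(f).
\]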